
    Death from the Perspective of Luhmann’s System Theory

    The aim of this article is to address the topic of death from a Luhmannian perspective. First, the article introduces Luhmann's general theory to provide a background for the way he tackles sociological and philosophical problems; it then describes the theory's application to religion and deduces various implications for the topic of death. For the discussion of death, we refer to some of Hegel's insights, as they motivated central parts of Luhmann's theory, even though he replaced the Hegelian notions with system-theoretical ones. While this may seem like a further abstraction and mechanization, we assume that it significantly facilitates combining outside and inside perspectives on death. In contrast to philosophical existentialism, Luhmann's system-oriented approach does not emphasize the situated character of human reason and its gaining of authenticity by facing death and finitude. Instead, it points to the entanglement of society and consciousness, focusing on the former while providing hints at the otherness of consciousness. Here, authenticity is achieved not by writing about existential topics, but rather through some sort of parallax view.

    Challenges and Legal Gaps of Genetic Profiling in the Era of Big Data

    Profiling of individuals based on inborn, acquired, and assigned characteristics is central to decision making in health care. In the era of omics and big smart data, it becomes urgent to differentiate between the data governance affordances of different profiling activities. Typically, diagnostic profiling is the focus of researchers and physicians, and other types are regarded as undesired side effects, for example in connection with health care insurance risk calculations. Profiling in a legal sense is addressed, for example, by EU data protection law: it is defined in the General Data Protection Regulation as automated decision making. This term does not fully correspond to profiling in biomedical research and health care, and its impact on privacy has hardly ever been examined. Profiling also concerns the fundamental right to non-discrimination whenever profiles are used in a way that has a discriminatory effect on individuals. Here, we focus on genetic profiling, define related notions, since legal and subject-matter definitions frequently differ, and discuss the ethical and legal challenges.

    A systematic overview on methods to protect sensitive data provided for various analyses

    In view of the various methodological developments regarding the protection of sensitive data, especially with respect to privacy-preserving computation and federated learning, a conceptual categorization and comparison of methods stemming from different fields is often desired. More concretely, it is important to provide guidance for practitioners, who lack an overview of suitable approaches for certain scenarios, be it differential privacy for interactive queries, k-anonymity methods and synthetic data generation for data publishing, or secure federated analysis for multiparty computation without sharing the data itself. Here, we provide an overview based on central criteria describing a context for privacy-preserving data handling, which allows informed decisions in view of the many alternatives. Besides guiding practice, this categorization of concepts and methods is intended as a step towards a comprehensive ontology for anonymization. We emphasize throughout the paper that there is no panacea and that context matters.
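
    A minimal sketch of one of the surveyed concepts, assuming hypothetical toy data and attribute names: a k-anonymity check computes the size of the smallest group of records sharing the same quasi-identifier values, the core quantity behind k-anonymity methods for data publishing.

```python
from collections import Counter

# Toy records: age_band and zip_prefix act as quasi-identifiers,
# diagnosis as the sensitive attribute (all values invented).
records = [
    {"age_band": "30-39", "zip_prefix": "551", "diagnosis": "asthma"},
    {"age_band": "30-39", "zip_prefix": "551", "diagnosis": "diabetes"},
    {"age_band": "40-49", "zip_prefix": "552", "diagnosis": "asthma"},
]

def k_anonymity(rows, quasi_identifiers):
    """Size of the smallest equivalence class over the quasi-identifiers."""
    classes = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return min(classes.values())

print(k_anonymity(records, ["age_band", "zip_prefix"]))  # -> 1: third record is unique
```

    A dataset is k-anonymous for a given k only if this minimum is at least k; here the third record would need to be generalized or suppressed before publishing.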

    Embedding Risk-Based Anonymization into Data Access Control for Providing Individual-Level Health Data in a Secure Way

    Especially in biomedical research, individual-level data must be protected due to the sensitivity of the data associated with patients. The broad goal of scientific data re-use is to allow many researchers to derive new hypotheses and insights from the data while preserving privacy. Data usage control (DUC), an attribute-based access mechanism, promises to overcome the limitations of traditional access control models in achieving that goal. Park and Sandhu proposed the usage control (UCON) model as an instance of DUC, which defines policies that evaluate certain attributes. Here, we present a UCON-based architecture that is augmented with risk-based anonymization as provided by the R package sdcMicro and an eXtensible Access Control Markup Language (XACML) environment with a core policy decision point as implemented by AuthzForce.
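
    A minimal sketch of the decision logic such an architecture might implement, with invented function names and a deliberately simplistic risk proxy (the paper relies on sdcMicro's risk measures and an XACML policy decision point instead):

```python
from collections import Counter

def estimate_risk(rows, quasi_identifiers):
    # Simplistic proxy: share of records unique on the quasi-identifiers.
    # sdcMicro offers far more refined individual and global risk measures.
    classes = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return sum(1 for size in classes.values() if size == 1) / len(rows)

def decide_access(subject, rows, quasi_identifiers, max_risk=0.05):
    """Hypothetical policy decision point combining attribute checks
    (the UCON part) with a re-identification risk threshold."""
    if subject.get("role") != "researcher" or not subject.get("ethics_approval"):
        return "Deny"
    if estimate_risk(rows, quasi_identifiers) > max_risk:
        return "Deny"  # in the full architecture: anonymize further, re-evaluate
    return "Permit"
```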

    Reconsidering Anonymization-Related Concepts and the Term "Identification" Against the Backdrop of the European Legal Framework

    Sharing data in biomedical contexts has become increasingly relevant, but privacy concerns set constraints on the free sharing of individual-level data. Data protection law protects only data relating to an identifiable individual, whereas "anonymous" data are free to be used by everybody. The usage of many terms related to anonymization is often inconsistent across domains such as statistics and law. The crucial term "identification" seems especially hard to define, since its definition presupposes the existence of identifying characteristics, leading to some circularity. In this article, we present a discussion of important terms based on a legal perspective, which is outlined before we turn to issues related to the usage of terms such as unique "identifiers," "quasi-identifiers," and "sensitive attributes." Based on these terms, we have tried to circumvent a circular definition of the term "identification" by making two decisions: first, deciding which (natural) identifier should stand for the individual; second, deciding how to recognize the individual. In addition, we provide an overview of anonymization techniques/methods for preventing re-identification. The discussion of basic notions related to anonymization shows that there is some work to be done in order to achieve a mutual understanding between legal and technical experts concerning some of these notions. A dialectical definition process that merges technical and legal perspectives on these terms seems important for enhancing mutual understanding.
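
    The term categories can be made concrete with a toy record (attribute names and values invented): dropping the direct identifier alone does not prevent singling out if the combination of quasi-identifiers remains unique.

```python
record = {
    "name": "Jane Doe",           # direct identifier
    "birth_date": "1980-04-12",   # quasi-identifier
    "zip_code": "55122",          # quasi-identifier
    "sex": "F",                   # quasi-identifier
    "diagnosis": "hypertension",  # sensitive attribute
}

# Naive "de-identification": remove only the direct identifier.
pseudonymized = {k: v for k, v in record.items() if k != "name"}

# The triple (birth_date, zip_code, sex) may still be unique in the
# population, so the record can be singled out and, given background
# knowledge, re-identified despite the missing name.
print(pseudonymized)
```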

    Medical Informatics in a Tension Between Black-Box AI and Trust

    For medical informaticians, it has become increasingly crucial to assess the benefits and disadvantages of AI-based solutions as promising alternatives to many traditional tools. Besides quantitative criteria such as accuracy and processing time, healthcare providers are often interested in qualitative explanations of the solutions. Explainable AI provides methods and tools that are interpretable enough to afford different stakeholders a qualitative understanding of its solutions. Its main purpose is to provide insights into the black-box mechanisms of machine learning programs. Our goal here is to advance the problem of qualitatively assessing AI from the perspective of medical informaticians by providing insights into the central notions: explainability, interpretability, understanding, trust, and confidence.
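
    As one generic illustration of the kind of interpretability tool at stake (a standard scikit-learn example, not taken from the paper): permutation importance gives stakeholders a first, qualitative look into an otherwise black-box model.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for clinical data with two informative features.
X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature degrade the model's accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature {i}: mean accuracy drop {drop:.3f}")
```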

    Explaining Contextualized Word Embeddings in Biomedical Research – A Qualitative Investigation

    Contextualized word embeddings have proved to be highly successful quantitative representations of words that allow various tasks, such as clinical entity normalization in unstructured texts, to be solved efficiently. In this paper, we investigate how Saussurean sign theory can be used as a qualitative explainable AI (XAI) method for word embeddings. Our assumption is that the main goal of XAI is to produce confidence and/or trust, which can be gained through quantitative as well as qualitative approaches. One important result is that the differential structure of language, as explained by Saussure, corresponds to the possibility of adding and subtracting word embeddings. Conversely, these mathematical structures provide insights into the inner workings of natural language.
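
    The additive structure mentioned above is the basis of the classic analogy arithmetic; the sketch below uses invented four-dimensional vectors chosen so the analogy works out, since real embeddings would have to be learned from a corpus.

```python
import numpy as np

# Invented toy "embeddings"; real contextualized embeddings are learned.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.1, 0.8, 0.3]),
    "man":   np.array([0.1, 0.9, 0.1, 0.2]),
    "woman": np.array([0.1, 0.2, 0.8, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Vector differences encode relations: king - man + woman lands near queen.
target = emb["king"] - emb["man"] + emb["woman"]
print(max(emb, key=lambda w: cosine(emb[w], target)))  # -> queen
```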

    The Sortal Concept in the Context of Biomedical Record Linkage

    Biomedical record linkage is designed specifically for linking patient data across different data repositories. An important question in this context is whether singling out is sufficient for identifying a patient and, if not, what is required for identification in general. To provide hints towards an answer, we extend previous work on the concept of identity, in particular the sortal concept stemming from analytical philosophy and upper-level ontologies. A sortal is a concept that is associated with an identity criterion; for example, the concept "set" has the identity criterion "having the same members". Based on a description of a record linkage setting, we operationalize the sortal concept by distinguishing between the digital representation of a person (d-sortal) and the person in the flesh (b-sortal).
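
    The notion of an identity criterion translates directly into code (a sketch with invented class and field names): an equality definition over chosen fields plays the role of a d-sortal's identity criterion, in analogy to set identity via membership.

```python
from dataclasses import dataclass

# Set identity criterion: same members, regardless of how the set was written.
assert {1, 2, 3} == {3, 2, 1}

@dataclass(frozen=True)
class PatientRecord:
    """Hypothetical d-sortal: identity of the digital representation is
    decided by the fields below via the generated __eq__, not by the person."""
    birth_date: str
    zip_code: str
    sex: str

a = PatientRecord("1980-04-12", "55122", "F")
b = PatientRecord("1980-04-12", "55122", "F")
print(a == b)  # True: same d-sortal identity; whether both denote the same
               # b-sortal (person in the flesh) is a further question
```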

    Maschinelle Lernverfahren für nieder- und hochdimensionale Probleme (Machine Learning Methods for Low- and High-Dimensional Problems)

    In this habilitation thesis, problems in two different domains (record linkage and high-dimensional data) are addressed using machine learning approaches. The assumption is that these approaches lead to insights and solutions at which it would be difficult or even impossible to arrive with deterministic and classical statistical methods.

    Combining techniques for screening and evaluating interaction terms on high-dimensional time-to-event data

    BACKGROUND: Molecular data, e.g. arising from microarray technology, are often used for predicting the survival probabilities of patients. For multivariate risk prediction models on such high-dimensional data, there are established techniques that combine parameter estimation and variable selection. One big challenge is to incorporate interactions into such prediction models. In this feasibility study, we present building blocks for evaluating and incorporating interaction terms in high-dimensional time-to-event settings, especially settings in which it is computationally too expensive to check all possible interactions. RESULTS: We use a boosting technique for the estimation of effects and the following building blocks for pre-selecting interactions: (1) resampling, (2) random forests, and (3) orthogonalization as a data pre-processing step. In a simulation study, the strategy that uses all building blocks detects true main effects and interactions with high sensitivity in different kinds of scenarios. The main challenge is posed by interactions composed of variables that do not represent main effects, but our findings are promising in this regard as well. Results on real-world data illustrate that the effect sizes of interactions may frequently be too small to improve prediction performance, even though the interactions are potentially of biological relevance. CONCLUSION: Screening interactions through random forests is feasible and useful when one is interested in finding relevant two-way interactions. The other building blocks also contribute considerably to an enhanced pre-selection of interactions. We determined the limits of interaction detection in terms of the necessary effect sizes. Our study emphasizes the importance of making full use of existing methods in addition to establishing new ones.
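
    A minimal sketch of the screening idea, using invented variable names and a plain regression outcome instead of time-to-event data for brevity: a random forest ranks the variables, and only pairs among the top-ranked ones are passed on as interaction candidates (in the paper, to a boosting model).

```python
from itertools import combinations
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
# Simulated outcome: main effects of x0 and x1 plus an x0*x1 interaction.
y = X[:, 0] + X[:, 1] + 2 * X[:, 0] * X[:, 1] + rng.normal(scale=0.5, size=200)

forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
top = np.argsort(forest.feature_importances_)[::-1][:5]  # screen to 5 variables

# Only pairs among the screened variables are evaluated downstream,
# instead of all 50*49/2 possible two-way interactions.
candidates = list(combinations(sorted(int(i) for i in top), 2))
print(candidates)
```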